LLM Gateway

For instructions on how to authenticate to use this endpoint, see API overview.

Endpoints

POST /api/projects/:project_id/llm_gateway/v1/chat/completions
POST /api/projects/:project_id/llm_gateway/v1/messages

Create llm gateway v1 chat completions

Create a chat completion. Compatible with OpenAI's Chat Completions API format.

Required API key scopes

task:write

Path parameters

  • project_id
    string

Query parameters

  • format
    string
One of: "json", "txt"

Request parameters

  • model
    string
  • messages
    array
  • temperature
    number
  • top_p
    number
  • n
    integer
  • stream
    boolean
    Default: false
  • stream_options
  • stop
    array
  • max_tokens
    integer
  • max_completion_tokens
    integer
  • presence_penalty
    number
  • frequency_penalty
    number
  • logit_bias
  • user
    string
  • tools
    array
  • tool_choice
  • parallel_tool_calls
    boolean
  • response_format
  • seed
    integer
  • logprobs
    boolean
  • top_logprobs
    integer
  • modalities
    array
  • prediction
  • audio
  • reasoning_effort
  • verbosity
  • store
    boolean
  • web_search_options
  • functions
    array
  • function_call

Response


Example request

POST /api/projects/:project_id/llm_gateway/v1/chat/completions
export POSTHOG_PERSONAL_API_KEY=[your personal api key]
curl -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  <ph_app_host>/api/projects/:project_id/llm_gateway/v1/chat/completions/ \
  -d '{"model": "string", "messages": []}'
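Since the request body is a JSON document built from the parameters listed above, here is a minimal Python sketch of assembling one. The model name and message content are placeholders for illustration, not values this gateway necessarily routes.

```python
import json

def build_chat_request(model, messages, stream=False, max_tokens=None):
    """Assemble a Chat Completions request body from the documented parameters."""
    body = {"model": model, "messages": messages, "stream": stream}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return body

# Placeholder model name; substitute one your gateway actually routes.
body = build_chat_request(
    "gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
    max_tokens=64,
)
print(json.dumps(body))
```

The same dictionary can be passed as the `-d` payload of the curl call above after serializing it with `json.dumps`.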

Example response

Status 200 Successful response with chat completion
RESPONSE
{
  "id": "string",
  "object": "chat.completion",
  "created": 0,
  "model": "string",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "system",
        "content": "string",
        "name": "string",
        "function_call": null,
        "tool_calls": [
          null
        ]
      },
      "finish_reason": "string"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  },
  "system_fingerprint": "string",
  "service_tier": "auto"
}
Status 400 Invalid request parameters
RESPONSE
{
  "error": {
    "property1": null,
    "property2": null
  }
}
Status 500 Internal server error
RESPONSE
{
  "error": {
    "property1": null,
    "property2": null
  }
}
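To illustrate the response shape, a short Python sketch that pulls the assistant text and token usage out of a completion payload shaped like the 200 example above. The values mirror the documented structure, not real model output.

```python
import json

# A payload shaped like the documented 200 response (illustrative values).
response_json = """
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 0,
  "model": "string",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hi there"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 3, "completion_tokens": 2, "total_tokens": 5}
}
"""

completion = json.loads(response_json)
# The first choice carries the generated message.
reply = completion["choices"][0]["message"]["content"]
total = completion["usage"]["total_tokens"]
print(reply, total)
```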

Create llm gateway v1 messages

Create a message using Anthropic's Claude models. Compatible with Anthropic's Messages API format.

Required API key scopes

task:write

Path parameters

  • project_id
    string

Query parameters

  • format
    string
One of: "json", "txt"

Request parameters

  • model
    string
  • messages
    array
  • max_tokens
    integer
    Default: 4096
  • temperature
    number
  • top_p
    number
  • top_k
    integer
  • stream
    boolean
    Default: false
  • stop_sequences
    array
  • system
  • metadata
  • thinking
  • tools
    array
  • tool_choice
  • service_tier

Response


Example request

POST /api/projects/:project_id/llm_gateway/v1/messages
export POSTHOG_PERSONAL_API_KEY=[your personal api key]
curl -X POST \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $POSTHOG_PERSONAL_API_KEY" \
  <ph_app_host>/api/projects/:project_id/llm_gateway/v1/messages/ \
  -d '{"model": "string", "messages": []}'
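A Messages request body can be assembled the same way. The sketch below applies the documented `max_tokens` default of 4096 and treats `system` as optional; the model name is a placeholder, not a value this gateway necessarily routes.

```python
import json

def build_messages_request(model, messages, max_tokens=4096, system=None):
    """Assemble an Anthropic-style Messages request body.
    max_tokens defaults to 4096, matching the parameter list above."""
    body = {"model": model, "messages": messages, "max_tokens": max_tokens}
    if system is not None:
        body["system"] = system
    return body

# Placeholder model name for illustration only.
body = build_messages_request(
    "claude-sonnet-4-20250514",
    [{"role": "user", "content": "Hello"}],
    system="Reply briefly.",
)
print(json.dumps(body))
```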

Example response

Status 200 Successful response with generated message
RESPONSE
{
  "id": "string",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "property1": null,
      "property2": null
    }
  ],
  "model": "string",
  "stop_reason": "end_turn",
  "stop_sequence": "string",
  "usage": {
    "input_tokens": 0,
    "output_tokens": 0,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "server_tool_use": null,
    "service_tier": "standard"
  }
}
Status 400 Invalid request parameters
RESPONSE
{
  "error": {
    "property1": null,
    "property2": null
  }
}
Status 500 Internal server error
RESPONSE
{
  "error": {
    "property1": null,
    "property2": null
  }
}
